Design
The main goals in designing the physical data layout are to provide balanced I/Os across all the disks that are randomly accessed and to isolate the sequential I/Os. You know that the financial system has characteristics of both the OLTP and DSS systems.
An OLTP system has heavy update activity that leads you to isolate the redo log and archive log files. You don't do this with a DSS system because the DSS doesn't have heavy redo activity (although it certainly does not hurt the performance of a DSS system to have isolated log volumes).
For both OLTP-type transactions and DSS-type queries, you need a sufficient number of disk drives to handle the I/O these transactions generate. Monitor the system's I/O rates to ensure that the I/O subsystem is not overloaded during peak times.
The layout for a financial system looks more like an OLTP system than a DSS system because of the log activity. A minimal configuration, with the minimum number of volumes for each element shown in parentheses, looks something like this:
- System (1+): The system disk is used for the operating system, the swap file (if applicable), user files, and the Oracle binaries. If possible, mirror this disk for additional fault tolerance.
- Redo log (1+): You should have at least one log volume, made up of at least two physical disks using RAID-1. With only one log volume, performance is degraded during archiving because the archiver's reads disrupt the sequential nature of the log writes; a second log volume avoids this.
- Archive logs (1+): The number of disks needed for the archive log files is determined by the amount of data you need to archive. This data can be written to tape as necessary.
- Data and index (1+): Because a disk array always gives you concurrent access to its disks, it is not necessary to split the data and indexes into separate volumes. The number of disks you need for data and index is determined by the amount of random I/O your user community generates.
Both the data files and the indexes should be striped over as many disk drives as necessary to achieve optimal I/O rates on those disks. Remember from Chapter 14, "Advanced Disk I/O Concepts," that a disk drive can be pushed only to a certain maximum random I/O rate.
I recommend taking advantage of a disk array rather than relying on Oracle striping. A hardware disk array can provide you with the highest level of performance and optional fault tolerance, if desired.
The fact that some tables are hot and others are cold is irrelevant because the data access is uniformly distributed across all the disks. Load balancing is not an issue because the small size of the stripe ensures that random data accesses are spread out across all the disks in the array.
If you use a disk array, many of the management tasks and load balancing tasks are greatly simplified. With the disk array, you also have the option of using fault tolerance without affecting system performance. Of course, using fault tolerance requires significantly more disks.
Tuning
Tuning a financial system is much like tuning an OLTP system, with the exception of the data block size, which takes on the characteristics of a DSS system. When tuning a financial system, you should analyze the system to see whether adjustments to these parameters are necessary:
- DB_BLOCK_BUFFERS. Block buffers are probably the most critical area in the financial system. An insufficient number of block buffers results in higher-than-normal I/O rates and possibly an I/O bottleneck. The statistics for the buffer cache are kept in the dynamic performance table V$SYSSTAT. The ratio of PHYSICAL READS to the sum of DB BLOCK GETS and CONSISTENT GETS is the cache-miss ratio. This number should be minimized (the first query after this list computes the corresponding hit ratio).
- Library cache. Check the V$LIBRARYCACHE table, which contains statistics on how well you are using the library cache (see the second query after this list). The important columns in this table are PINS and RELOADS. A low number of reloads relative to the number of executions indicates a high cache-hit rate. You should be able to reduce library cache misses by increasing the amount of memory available for the library cache. Do so by increasing the Oracle tunable parameter SHARED_POOL_SIZE. Try increasing the size of the shared pool by 10 percent and monitor it again; if this is not sufficient, increase it by another 10 percent and continue.
- Open cursors. You may have to increase the number of cursors available to a session by increasing the Oracle parameter OPEN_CURSORS (this and the other parameter settings in this list are collected in the init.ora sketch that follows the list).
- Cursor space for time. If you have plenty of memory, you can speed access to the shared SQL areas by setting the Oracle initialization parameter CURSOR_SPACE_FOR_TIME to TRUE.
- Spin counts. By tuning the Oracle initialization parameter SPIN_COUNT to enable spinning on resources (that is, the process spins instead of going to sleep while waiting for a resource to become available), you may see improved performance. SPIN_COUNT is useful when you have multiple CPUs and short-running transactions.
- Data dictionary cache. To check the efficiency of the data dictionary cache, look at the dynamic performance table V$ROWCACHE (a summary query appears after this list). The important columns to view in this table are GETS and GETMISSES. The ratio of GETMISSES to GETS should be low.
- Rollback contention. Rollback contention occurs when too many transactions try to use the same rollback segment at the same time and some of them have to wait. You can tell whether you are seeing contention on rollback segments by looking at the dynamic performance table V$WAITSTAT (see the query after this list). Check for an excessive number of waits on UNDO HEADER, UNDO BLOCK, SYSTEM UNDO HEADER, and SYSTEM UNDO BLOCK, and compare these to the total number of requests for data. If the number is high, you need more rollback segments.
- Use many small rollback segments. Create enough small rollback segments that, ideally, one is available for each server process.
- Create some large rollback segments. For some of the larger transactions, you may have to create some larger rollback segments. Assign these rollback segments to the longer-running transactions (the CREATE ROLLBACK SEGMENT sketch after this list shows both sizes).
- Latch contention. Latch contention can be determined by examining the dynamic performance table V$LATCH (see the query after this list). Look for the ratio of MISSES to GETS, the number of SLEEPS, and the ratio of IMMEDIATE_MISSES to IMMEDIATE_GETS. If the miss ratio is high, reduce the size of LOG_SMALL_ENTRY_MAX_SIZE to minimize the time any user process holds the latch; alternatively, increase the value of LOG_SIMULTANEOUS_COPIES to reduce contention by adding more redo copy latches. If neither of these solutions helps, set the initialization parameter LOG_ENTRY_PREBUILD_THRESHOLD; any redo entry smaller than this value is prebuilt, which reduces the time the latch is held.
- Checkpoints. Under certain circumstances, you may have to tune checkpoints. Although this is usually not necessary, if you see severely degraded performance during checkpoints, you can reduce the effect by enabling the CKPT process. Do so by setting the Oracle initialization parameter CHECKPOINT_PROCESS to TRUE.
- Archiving. By now, you should be convinced that you should always run with archiving enabled. By adjusting the initialization parameters LOG_ARCHIVE_BUFFERS and LOG_ARCHIVE_BUFFER_SIZE, you can either slow down or speed up archiving. Speeding up archiving makes its effect on the system shorter in duration but more noticeable; slowing it down lengthens the duration but reduces the effect. This process is described in more detail in Chapter 9, "Oracle Instance Tuning."
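To illustrate the buffer cache check, a query along the following lines computes the cache-hit ratio from V$SYSSTAT. This is a sketch; the column alias is illustrative, and on a freshly started instance the counters are too small to be meaningful.

-- Sketch: buffer cache hit ratio from V$SYSSTAT.
-- A ratio that falls over time suggests DB_BLOCK_BUFFERS is too small.
SELECT 1 - (phys.value / (gets.value + cons.value)) "Cache Hit Ratio"
  FROM v$sysstat phys, v$sysstat gets, v$sysstat cons
 WHERE phys.name = 'physical reads'
   AND gets.name = 'db block gets'
   AND cons.name = 'consistent gets';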
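For the library cache, a query such as this one summarizes V$LIBRARYCACHE; the ratio of RELOADS to PINS is the figure to watch, and it should stay very low:

-- Sketch: library cache reloads relative to executions (pins).
-- If the reload ratio is more than a few percent, consider raising
-- SHARED_POOL_SIZE in 10-percent steps as described above.
SELECT SUM(pins) "Executions",
       SUM(reloads) "Cache Misses",
       SUM(reloads) / SUM(pins) "Reload Ratio"
  FROM v$librarycache;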
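The data dictionary cache check is similar; this sketch sums V$ROWCACHE across all of its entries:

-- Sketch: data dictionary cache efficiency from V$ROWCACHE.
-- The miss ratio (GETMISSES/GETS) should be low.
SELECT SUM(gets) "Gets",
       SUM(getmisses) "Get Misses",
       SUM(getmisses) / SUM(gets) "Miss Ratio"
  FROM v$rowcache;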
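To look for rollback segment contention, compare the undo wait counts in V$WAITSTAT with the total number of data block requests from V$SYSSTAT, as in this sketch:

-- Sketch: waits on undo (rollback) headers and blocks.
SELECT class, count
  FROM v$waitstat
 WHERE class IN ('undo header', 'undo block',
                 'system undo header', 'system undo block');

-- Total data block requests, for comparison.
SELECT SUM(value) "Total Requests"
  FROM v$sysstat
 WHERE name IN ('db block gets', 'consistent gets');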
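If more rollback segments are needed, they can be created along these lines. The tablespace name rbs_ts and the storage sizes here are hypothetical; size them for your own transactions.

-- Sketch: a small rollback segment for short OLTP-style transactions...
CREATE ROLLBACK SEGMENT rbs01
  TABLESPACE rbs_ts
  STORAGE (INITIAL 50K NEXT 50K MINEXTENTS 20 MAXEXTENTS 100);

-- ...and a larger one for long-running work.
CREATE ROLLBACK SEGMENT rbs_large
  TABLESPACE rbs_ts
  STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20 MAXEXTENTS 100);

-- Bring the new segments online (or list them in the ROLLBACK_SEGMENTS
-- initialization parameter so they come online at startup).
ALTER ROLLBACK SEGMENT rbs01 ONLINE;
ALTER ROLLBACK SEGMENT rbs_large ONLINE;

-- A long-running job can then request the large segment as the
-- first statement of its transaction:
SET TRANSACTION USE ROLLBACK SEGMENT rbs_large;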
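Latch statistics come from V$LATCH; for redo latch contention, the redo allocation and redo copy latches are the ones of interest:

-- Sketch: redo latch contention from V$LATCH.
-- Watch MISSES/GETS, SLEEPS, and IMMEDIATE_MISSES/IMMEDIATE_GETS.
SELECT name, gets, misses, sleeps,
       immediate_gets, immediate_misses
  FROM v$latch
 WHERE name IN ('redo allocation', 'redo copy');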
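Finally, the parameter-based items in the list (open cursors, cursor space for time, spin counts, checkpoints, and archiving) are set in the initialization file. The following init.ora fragment is a sketch only; every value shown is an assumption that must be sized for your own hardware and workload.

# Illustrative init.ora fragment -- all values are assumptions.
open_cursors = 255              # cursors available to each session
cursor_space_for_time = TRUE    # only if memory is plentiful
spin_count = 2000               # multiple CPUs, short transactions
checkpoint_process = TRUE       # enable the CKPT process
log_archive_buffers = 4         # number of archiver buffers
log_archive_buffer_size = 64    # size of each buffer, in redo log blocks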
As you can see, most of these recommendations are very similar to those given for the OLTP system (refer to Chapter 16, "OLTP System"). With the financial system, you can make some of the same performance enhancements as with the OLTP system; the following section reviews the possibilities.